Results 1 - 17 of 17
1.
IEEE Trans Vis Comput Graph; 29(3): 1638-1650, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34780329

ABSTRACT

Data visualizations have been increasingly used in oral presentations to communicate data patterns to the general public. Clear verbal introductions of visualizations to explain how to interpret the visually encoded information are essential to convey the takeaways and avoid misunderstandings. We contribute a series of studies to investigate how to effectively introduce visualizations to the audience with varying degrees of visualization literacy. We begin with understanding how people are introducing visualizations. We crowdsource 110 introductions of visualizations and categorize them based on their content and structures. From these crowdsourced introductions, we identify different introduction strategies and generate a set of introductions for evaluation. We conduct experiments to systematically compare the effectiveness of different introduction strategies across four visualizations with 1,080 participants. We find that introductions explaining visual encodings with concrete examples are the most effective. Our study provides both qualitative and quantitative insights into how to construct effective verbal introductions of visualizations in presentations, inspiring further research in data storytelling.

2.
IEEE Trans Vis Comput Graph; 29(1): 1211-1221, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36155465

ABSTRACT

The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly). We conducted a user study with sixteen data visualization experts and non-experts to investigate how they designed visualizations for comparisons in our design space. Based on the rich set of visualization techniques observed, we extracted key design features from the visualizations and synthesized them into a subset of sixteen representative visualization designs. We then conducted a follow-up study to validate user preferences for the sixteen representative visualizations corresponding to utterances in our design space. Findings from these studies suggest guidelines and future directions for designing natural language interfaces and recommendation tools to better support natural language comparisons in visual analytics.


Subject(s)
Computer Graphics, User-Computer Interface, Humans, Follow-Up Studies, Language
3.
IEEE Trans Vis Comput Graph; 29(1): 624-634, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36201416

ABSTRACT

Visualization research often focuses on perceptual accuracy or helping readers interpret key messages. However, we know very little about how chart designs might influence readers' perceptions of the people behind the data. Specifically, could designs interact with readers' social cognitive biases in ways that perpetuate harmful stereotypes? For example, when analyzing social inequality, bar charts are a popular choice to present outcome disparities between race, gender, or other groups. But bar charts may encourage deficit thinking, the perception that outcome disparities are caused by groups' personal strengths or deficiencies, rather than external factors. These faulty personal attributions can then reinforce stereotypes about the groups being visualized. We conducted four experiments examining design choices that influence attribution biases (and therefore deficit thinking). Crowdworkers viewed visualizations depicting social outcomes that either mask variability in data, such as bar charts or dot plots, or emphasize variability in data, such as jitter plots or prediction intervals. They reported their agreement with both personal and external explanations for the visualized disparities. Overall, when participants saw visualizations that hide within-group variability, they agreed more with personal explanations. When they saw visualizations that emphasize within-group variability, they agreed less with personal explanations. These results demonstrate that data visualizations about social inequity can be misinterpreted in harmful ways and lead to stereotyping. Design choices can influence these biases: Hiding variability tends to increase stereotyping while emphasizing variability reduces it.


Subject(s)
Computer Graphics, Stereotyping, Humans, Social Perception
4.
IEEE Trans Vis Comput Graph; 29(1): 1005-1015, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36166526

ABSTRACT

Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Scientists and journalists often communicate these incremental findings to lay people through visualizations and text (e.g., the positive and negative effects of caffeine intake). Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. To address this knowledge gap, we conducted a series of four experiments (N = 1,166) in which participants synthesized empirical evidence from a pair of line charts presented sequentially. In Experiment 1, we administered a baseline condition with charts depicting no specific context, for which participants held no strong prior belief. To test for generalizability, we introduced real-world scenarios to our visualizations in Experiment 2 and added accompanying text descriptions similar to online news articles or blog posts in Experiment 3. In all three experiments, we varied the relative direction and magnitude of line slopes within the chart pairs. We found that participants tended to weigh the positive slope more when the two charts depicted relationships in opposite directions (e.g., one positive slope and one negative slope). Participants tended to weigh the less steep slope more when the two charts depicted relationships in the same direction (e.g., both positive). Through these experiments, we characterize participants' synthesis behaviors depending on the relationship between the information they viewed, contribute to theories describing the underlying cognitive mechanisms of information synthesis, and describe design implications for data storytelling.


Subject(s)
Computer Graphics, Data Visualization, Humans, Communication
5.
IEEE Trans Vis Comput Graph; 29(1): 493-503, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36166548

ABSTRACT

When an analyst or scientist has a belief about how the world works, their thinking can be biased in favor of that belief. Therefore, one bedrock principle of science is to minimize that bias by testing the predictions of one's belief against objective data. But interpreting visualized data is a complex perceptual and cognitive process. Through two crowdsourced experiments, we demonstrate that supposedly objective assessments of the strength of a correlational relationship can be influenced by how strongly a viewer believes in the existence of that relationship. Participants viewed scatterplots depicting a relationship between meaningful variable pairs (e.g., number of environmental regulations and air quality) and estimated their correlations. They also estimated the correlation of the same scatterplots labeled instead with generic 'X' and 'Y' axes. In a separate section, they also reported how strongly they believed there to be a correlation between the meaningful variable pairs. Participants estimated correlations more accurately when they viewed scatterplots labeled with generic axes compared to scatterplots labeled with meaningful variable pairs. Furthermore, when viewers believed that two variables should have a strong relationship, they overestimated correlations between those variables by an r-value of about 0.1. When they believed that the variables should be unrelated, they underestimated the correlations by an r-value of about 0.1. While data visualizations are typically thought to present objective truths to the viewer, these results suggest that existing personal beliefs can bias even objective statistical values people extract from data.
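As a rough illustration of the reported effect size (this is not code or a model from the paper; the additive shift of 0.1, the noise level, and all names are assumptions), the following Python sketch simulates viewers whose reported correlations drift about 0.1 in the direction of their prior belief:

import numpy as np

rng = np.random.default_rng(0)

def biased_estimate(true_r, belief_direction, shift=0.1, noise_sd=0.05):
    # Toy model: the reported r equals the true r, shifted ~0.1 toward the
    # viewer's prior belief (+1 = expects a relationship, -1 = expects none),
    # plus perceptual noise; clipped to the valid range for a correlation.
    estimate = true_r + belief_direction * shift + rng.normal(0, noise_sd)
    return float(np.clip(estimate, -1.0, 1.0))

true_r = 0.4
believers = [biased_estimate(true_r, +1) for _ in range(1000)]
skeptics = [biased_estimate(true_r, -1) for _ in range(1000)]
print(f"believers' mean estimate: {np.mean(believers):.2f}")  # about 0.50
print(f"skeptics'  mean estimate: {np.mean(skeptics):.2f}")   # about 0.30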

6.
IEEE Comput Graph Appl; 42(3): 29-38, 2022.
Article in English | MEDLINE | ID: mdl-35671279

ABSTRACT

In this Viewpoint article, we describe the persistent tensions between various camps over the "right" way to conduct evaluations in visualization. Visualization as a field is an amalgamation of the cognitive and perceptual sciences and computer graphics, among others. As a result, these relatively disjointed lineages understandably approach the topic of evaluation very differently. This is both a blessing and a curse for our field. It is a blessing because the collaboration of diverse perspectives is a breeding ground for innovation. Yet it is a curse because, as a community, we have yet to reach a shared appreciation of differing perspectives on evaluation. We explicate these differing expectations and conventions to lay out the spectrum of evaluation design decisions, and we describe guiding questions that researchers may consider when designing evaluations in order to navigate readers' differing expectations.


Subject(s)
Computer Graphics, Research Design
7.
IEEE Trans Vis Comput Graph; 28(1): 955-965, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34587056

ABSTRACT

Well-designed data visualizations can lead to more powerful and intuitive processing by a viewer. To help a viewer intuitively compare values to quickly generate key takeaways, visualization designers can manipulate how data values are arranged in a chart to afford particular comparisons. Using simple bar charts as a case study, we empirically tested the comparison affordances of four common arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. We asked participants to type out what patterns they perceived in a chart and we coded their takeaways into types of comparisons. In a second study, we asked data visualization design experts to predict which arrangement they would use to afford each type of comparison and found both alignments and mismatches with our findings. These results provide concrete guidelines for how both human designers and automatic chart recommendation systems can make visualizations that help viewers extract the "right" takeaway.
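For readers who want to picture the four arrangements, the matplotlib sketch below draws one invented two-series dataset in each layout. The mapping of the arrangement names to these specific layouts is our reading of the abstract, not the paper's stimuli, and the data are placeholders:

import numpy as np
import matplotlib.pyplot as plt

# Illustrative data only: two series measured over three categories.
categories = ["A", "B", "C"]
s1 = np.array([3.0, 5.0, 2.0])
s2 = np.array([4.0, 2.0, 6.0])
x = np.arange(len(categories))

fig, axs = plt.subplots(2, 4, figsize=(12, 5))

# Vertically juxtaposed: the two series in separate panels, one above the other.
axs[0, 0].bar(x, s1)
axs[0, 0].set_title("vertically juxtaposed")
axs[1, 0].bar(x, s2)

# Horizontally juxtaposed: separate panels placed side by side.
axs[0, 1].bar(x, s1)
axs[0, 1].set_title("horizontally juxtaposed")
axs[0, 2].bar(x, s2)
axs[1, 1].axis("off")
axs[1, 2].axis("off")

# Overlaid: both series share one panel, drawn next to each other per category.
axs[0, 3].bar(x - 0.2, s1, width=0.4, label="series 1")
axs[0, 3].bar(x + 0.2, s2, width=0.4, label="series 2")
axs[0, 3].set_title("overlaid")
axs[0, 3].legend()

# Stacked: series 2 drawn on top of series 1 within the same bars.
axs[1, 3].bar(x, s1, label="series 1")
axs[1, 3].bar(x, s2, bottom=s1, label="series 2")
axs[1, 3].set_title("stacked")
axs[1, 3].legend()

for ax in fig.axes:
    ax.set_xticks(x)
    ax.set_xticklabels(categories)
plt.tight_layout()
plt.show()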

8.
IEEE Trans Vis Comput Graph; 28(12): 4515-4530, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34170828

ABSTRACT

Past studies have shown that when a visualization uses pictographs to encode data, the pictographs have a positive effect on memory, engagement, and assessment of risk. However, little is known about how pictographs affect one's ability to understand a visualization, beyond memory for values and trends. We conducted two crowdsourced experiments to compare the effectiveness of using pictographs when showing part-to-whole relationships. In Experiment 1, we compared pictograph arrays to more traditional bar and pie charts. We tested participants' ability to generate high-level insights following Bloom's taxonomy of educational objectives via six free-response questions. We found that accuracy for extracting information and generating insights did not differ overall between the two versions. To explore the motivating differences between the designs, we conducted a second experiment in which participants compared charts containing pictograph arrays to more traditional charts on five metrics and explained their reasoning. We found that some participants preferred the way pictographs allowed them to envision the topic more easily, while others preferred traditional bar and pie charts because they seem less cluttered and faster to read. These results suggest that, at least in simple visualizations depicting part-to-whole relationships, the choice of using pictographs has little influence on sensemaking and insight extraction. When deciding whether to use pictograph arrays, designers should consider visual appeal, perceived comprehension time, ease of envisioning the topic, and perceived clutter.


Subject(s)
Computer Graphics, Humans, Educational Status
9.
IEEE Trans Vis Comput Graph; 28(10): 3351-3364, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33760737

ABSTRACT

Data visualization design has a powerful effect on which patterns we see as salient and how quickly we see them. The visualization practitioner community prescribes two popular guidelines for creating clear and efficient visualizations: declutter and focus. The declutter guidelines suggest removing non-critical gridlines, excessive labeling of data values, and color variability to improve aesthetics and to maximize the emphasis on the data relative to the design itself. The focus guidelines for explanatory communication recommend including a clear headline that describes the relevant data pattern, highlighting a subset of relevant data values with a unique color, and connecting those values to written annotations that contextualize them in a broader argument. We evaluated how these recommendations impact recall of the depicted information across cluttered, decluttered, and decluttered+focused designs of six graph topics. Undergraduate students were asked to redraw previously seen visualizations, to recall their topics and main conclusions, and to rate the varied designs on aesthetics, clarity, professionalism, and trustworthiness. Decluttered designs received higher ratings on professionalism, and adding focus to the design led to higher ratings on aesthetics and clarity. The focused designs also led to better memory for the highlighted pattern in the data, as reflected in both redrawings of the original visualization and typed free-response conclusions, though we do not know whether these results would generalize beyond our memory-based tasks. The results largely empirically validate the intuitions of visualization designers and practitioners. The stimuli, data, analysis code, and Supplementary Materials are available at https://osf.io/wes9u/.


Subject(s)
Computer Graphics, Data Visualization, Humans, Mental Recall, Writing
10.
Article in English | MEDLINE | ID: mdl-37015379

ABSTRACT

Leaving the context of visualizations invisible can have negative impacts on understanding and transparency. While common wisdom suggests that recontextualizing visualizations with metadata (e.g., disclosing the data source or instructions for decoding the visualizations' encoding) may counter these effects, the impact remains largely unknown. To fill this gap, we conducted two experiments. In Experiment 1, we explored how chart type, topic, and user goal impacted which categories of metadata participants deemed most relevant. We presented 64 participants with four real-world visualizations. For each visualization, participants were given four goals and selected the type of metadata they most wanted from a set of 18 types. Our results indicated that participants were most interested in metadata which explained the visualization's encoding for goals related to understanding and metadata about the source of the data for assessing trustworthiness. In Experiment 2, we explored how these two types of metadata impact transparency, trustworthiness and persuasiveness, information relevance, and understanding. We asked 144 participants to explain the main message of two pairs of visualizations (one with metadata and one without); rate them on scales of transparency and relevance; and then predict the likelihood that they were selected for a presentation to policymakers. Our results suggested that visualizations with metadata were perceived as more thorough than those without metadata, but similarly relevant, accurate, clear, and complete. Additionally, we found that metadata did not impact the accuracy of the information extracted from visualizations, but may have influenced which information participants remembered as important or interesting.

11.
Article in English | MEDLINE | ID: mdl-37015636

ABSTRACT

A viewer's existing beliefs can prevent accurate reasoning with data visualizations. In particular, confirmation bias can cause people to overweigh information that confirms their beliefs, and dismiss information that disconfirms them. We tested whether confirmation bias exists when people reason with visualized data and whether certain visualization designs can elicit less biased reasoning strategies. We asked crowdworkers to solve reasoning problems that had the potential to evoke both poor reasoning strategies and confirmation bias. We created two scenarios, one in which we primed people with a belief before asking them to make a decision, and another in which people held pre-existing beliefs. The data was presented as either a table, a bar table, or a bar chart. To correctly solve the problem, participants should use a complex reasoning strategy to compare two ratios, each between two pairs of values. But participants could also be tempted to use simpler, superficial heuristics, shortcuts, or biased strategies to reason about the problem. Presenting the data in a table format helped participants reason with the correct ratio strategy while showing the data as a bar table or a bar chart led participants towards incorrect heuristics. Confirmation bias was not significantly present when beliefs were primed, but it was present when beliefs were pre-existing. Additionally, the table presentation format was more likely to afford the ratio reasoning strategy, and the use of ratio strategy was more likely to lead to the correct answer. These findings suggest that data presentation formats can affect affordances for reasoning.
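To make the "compare two ratios" strategy concrete, here is a minimal sketch with invented numbers (the abstract does not give the actual stimuli or values, and the treatment framing is hypothetical). The correct strategy compares a rate within each group, while a tempting shortcut compares raw counts and can give the opposite answer:

# Hypothetical 2x2 problem of the kind described: did treatment A or B work better?
# (Values are invented for illustration; they are not the study's stimuli.)
improved_a, not_improved_a = 40, 10   # treatment A
improved_b, not_improved_b = 90, 60   # treatment B

# Correct ratio strategy: compare the rate of improvement within each treatment.
rate_a = improved_a / (improved_a + not_improved_a)   # 0.80
rate_b = improved_b / (improved_b + not_improved_b)   # 0.60
print("ratio strategy:", "A" if rate_a > rate_b else "B")           # A

# Superficial heuristic: compare raw counts of improvement only.
print("count heuristic:", "A" if improved_a > improved_b else "B")  # B (misleading)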

12.
IEEE Trans Vis Comput Graph; 27(2): 1054-1062, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048726

ABSTRACT

Bar charts are among the most frequently used visualizations, in part because their position encoding leads them to convey data values precisely. Yet reproductions of single bars or groups of bars within a graph can be biased. Curiously, some previous work found that this bias resulted in an overestimation of reproduced data values, while other work found an underestimation. Across three empirical studies, we offer an explanation for these conflicting findings: this discrepancy is a consequence of the differing aspect ratios of the tested bar marks. Viewers are biased to remember a bar mark as being more similar to a prototypical square, leading to an overestimation of bars with a wide aspect ratio, and an underestimation of bars with a tall aspect ratio. Experiments 1 and 2 showed that the aspect ratio of the bar marks indeed influenced the direction of this bias. Experiment 3 confirmed that this pattern of misestimation bias was present for reproductions from memory, suggesting that this bias may arise when comparing values across sequential displays or views. We describe additional visualization designs that might be prone to this bias beyond bar charts (e.g., Mekko charts and treemaps), and speculate that other visual channels might hold similar biases toward prototypical values.
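A minimal sketch of the direction of bias described above, written as a rule of thumb (the function, the 1:1 threshold, and the example dimensions are illustrative assumptions, not the paper's model):

def predicted_reproduction_bias(width, height):
    # Toy rule of thumb from the abstract: bars are remembered as closer to a
    # prototypical square, so wide bars (width > height) tend to be overestimated
    # and tall bars (height > width) tend to be underestimated.
    aspect = width / height
    if aspect > 1:
        return "overestimate"
    if aspect < 1:
        return "underestimate"
    return "approximately unbiased"

print(predicted_reproduction_bias(width=60, height=20))   # overestimate
print(predicted_reproduction_bias(width=20, height=120))  # underestimate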

13.
IEEE Trans Vis Comput Graph; 27(2): 1117-1127, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33090954

ABSTRACT

A growing number of efforts aim to understand what people see when using a visualization. These efforts provide scientific grounding to complement design intuitions, leading to more effective visualization practice. However, published visualization research currently reflects a limited set of available methods for understanding how people process visualized data. Alternative methods from vision science offer a rich suite of tools for understanding visualizations, but no curated collection of these methods exists in either perception or visualization research. We introduce a design space of experimental methods for empirically investigating the perceptual processes involved with viewing data visualizations to ultimately inform visualization design guidelines. This paper provides a shared lexicon for facilitating experimental visualization research. We discuss popular experimental paradigms, adjustment types, response types, and dependent measures used in vision science research, rooting each in visualization examples. We then discuss the advantages and limitations of each technique. Researchers can use this design space to create innovative studies and progress scientific understanding of design choices and evaluations in visualization. We highlight a history of collaborative success between visualization and vision science research and advocate for a deeper relationship between the two fields that can elaborate on and extend the methodological design space for understanding visualization and vision.

14.
Animals (Basel); 10(12), 2020 Nov 26.
Article in English | MEDLINE | ID: mdl-33256102

ABSTRACT

The use of antibiotics for therapeutic and especially non-therapeutic purposes in livestock farms promotes the development of antibiotic resistance in previously susceptible bacteria through selective pressure. In this work, we examined E. coli isolates using the standard Kirby-Bauer disk diffusion susceptibility protocol and the CLSI standards. Companies selling retail chicken products in Los Angeles, California were grouped into three production categories: Conventional, No Antibiotics, and Humane Family Owned. Humane Family Owned is not a federally regulated category in the United States, but it indicates that the chicken is incubated, hatched, raised, slaughtered, and packaged by a single party, ensuring that the use of antibiotics across the entire production of the chicken is known and understood. We then examined the antibiotic resistance of the E. coli isolates (n = 325) by exposing them to seven common antibiotics; resistance was observed to two of them, ampicillin and erythromycin. As has been shown previously, we found that for both ampicillin and erythromycin there was no significant difference (p > 0.05) between Conventional and USDA (United States Department of Agriculture)-certified No Antibiotics chicken. Unique to this work, we additionally found that Humane Family Owned chicken had fewer (p ≤ 0.05) antibiotic-resistant E. coli isolates than either of the other two categories. Although erythromycin is not considered directly clinically relevant, we chose to test it because of its ecological significance to the environmental antibiotic resistome, which is not generally examined. To our knowledge, Humane Family Owned consumer chicken has not previously been studied for its antibiotic resistance. This work contributes to a better understanding of a potential strategy of chicken production for the overall benefit of human health, giving evidentiary support to the One Health approach implemented by the World Health Organization.
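For readers unfamiliar with how such category comparisons are typically reported (p > 0.05 vs. p ≤ 0.05), the sketch below runs Fisher's exact test on invented counts of resistant versus susceptible isolates for two production categories. The counts are placeholders, not the study's data, and the abstract does not state which statistical test the authors used:

from scipy.stats import fisher_exact

# Placeholder counts (not the study's data): [resistant, susceptible] isolates
# recovered from chicken in two production categories being compared.
conventional = [30, 70]
humane_family_owned = [12, 88]

odds_ratio, p_value = fisher_exact([conventional, humane_family_owned])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
print("significant (p <= 0.05)" if p_value <= 0.05 else "not significant (p > 0.05)")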

15.
IEEE Trans Vis Comput Graph; 26(10): 3051-3062, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31107654

ABSTRACT

A viewer can extract many potential patterns from any set of visualized data values. But that means that two people can see different patterns in the same visualization, potentially leading to miscommunication. Here, we show that when people are primed to see one pattern in the data as visually salient, they believe that naïve viewers will experience the same visual salience. Participants were told one of several backstories about political events that affected public polling data before viewing a graph that depicted those data. Given the backstory they heard, one pattern in the data was particularly visually salient to them. They then predicted which patterns naïve viewers would find most visually salient in the visualization. They were strongly influenced by their own knowledge, despite explicit instructions to ignore it, predicting that others would find the same patterns to be most visually salient. This result reflects a psychological phenomenon known as the curse of knowledge, where an expert struggles to re-create the state of mind of a novice. The present findings show that the curse of knowledge also plagues the visual perception of data, explaining why people can fail to connect with audiences when they communicate patterns in data.

16.
IEEE Trans Vis Comput Graph; 26(1): 853-862, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425111

ABSTRACT

Students who eat breakfast more frequently tend to have a higher grade point average. From this data, many people might confidently state that a before-school breakfast program would lead to higher grades. This is a reasoning error, because correlation does not necessarily indicate causation: X and Y can be correlated without one directly causing the other. While this error is pervasive, its prevalence might be amplified or mitigated by the way that the data is presented to a viewer. Across three crowdsourced experiments, we examined whether the way simple data relations are presented would mitigate this reasoning error. The first experiment tested examples similar to the breakfast-GPA relation, varying in the plausibility of the causal link. We asked participants to rate their level of agreement that the two variables were correlated, which they appropriately rated as high. However, participants also expressed high agreement with a causal interpretation of the data. Levels of support for the causal interpretation were not equally strong across visualization types: causality ratings were highest for text descriptions and bar graphs, but weaker for scatter plots. But is this effect driven by bar graphs aggregating data into two groups or by the visual encoding type itself? We isolated data aggregation from visual encoding type and examined their individual effects on perceived causality. Overall, different visualization designs afford different cognitive reasoning affordances for the same data. High levels of data aggregation by graphs tend to be associated with higher perceived causality in data. Participants perceived line and dot visual encodings as more causal than bar encodings. Our results demonstrate how some visualization designs trigger stronger perceptions of causality, while choosing other designs can help mitigate unwarranted perceptions of causality.
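As a minimal simulation of the point that X and Y can be correlated without one causing the other (the variable names, effect sizes, and confounder are invented for illustration and are not the study's data):

import numpy as np

rng = np.random.default_rng(1)
n = 5000

# A hidden third factor drives both variables; neither variable causes the other.
confounder = rng.normal(size=n)
breakfasts_per_week = 3.0 + 1.5 * confounder + rng.normal(scale=1.0, size=n)
gpa_like_score = 2.8 + 0.4 * confounder + rng.normal(scale=0.3, size=n)

r = np.corrcoef(breakfasts_per_week, gpa_like_score)[0, 1]
print(f"observed correlation r = {r:.2f}")  # clearly positive, yet not causal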

17.
IEEE Trans Vis Comput Graph; 26(1): 301-310, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425112

ABSTRACT

In visual depictions of data, position (i.e., the vertical height of a line or a bar) is believed to be the most precise way to encode information compared to other encodings (e.g., hue). Not only are other encodings less precise than position, but they can also be prone to systematic biases (e.g., color category boundaries can distort perceived differences between hues). By comparison, position's high level of precision may seem to protect it from such biases. In contrast, across three empirical studies, we show that while position may be a precise form of data encoding, it can also produce systematic biases in how values are visually encoded, at least for reports of average position across a short delay. In displays with a single line or a single set of bars, reports of average positions were significantly biased, such that line positions were underestimated and bar positions were overestimated. In displays with multiple data series (i.e., multiple lines and/or sets of bars), this systematic bias still persisted. We also observed an effect of "perceptual pull", where the average position estimate for each series was 'pulled' toward the other. These findings suggest that, although position may still be the most precise form of visual data encoding, it can also be systematically biased.
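One simple way to read the "perceptual pull" result is as a weighted average in which each series' reported position drifts partway toward the other series; the pull weight and example positions below are illustrative assumptions, not values fitted to the study:

def pulled_estimate(own_mean, other_mean, pull=0.15):
    # Toy model of "perceptual pull": the reported average position of one data
    # series is a blend of its true average and the other series' average.
    return (1 - pull) * own_mean + pull * other_mean

line_mean, bar_mean = 40.0, 70.0  # arbitrary average heights on a 0-100 axis
print(pulled_estimate(line_mean, bar_mean))  # 44.5: line report pulled upward
print(pulled_estimate(bar_mean, line_mean))  # 65.5: bar report pulled downward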
